We propose a selective encoding model to extend the sequence-to-sequence framework for abstractive sentence summarization. It consists of a sentence encoder, a selective gate network, and an attention-equipped decoder. The sentence encoder and decoder are built with recurrent neural networks. The selective gate network constructs a second-level sentence representation by controlling the information flow from encoder to decoder. This second-level representation is tailored for the sentence summarization task, which leads to better performance. We evaluate our model on the English Gigaword, DUC 2004, and MSR abstractive sentence summarization datasets. The experimental results show that the proposed selective encoding model outperforms state-of-the-art baseline models.
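The selective gate described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each encoder hidden state h_i is re-weighted elementwise by a sigmoid gate computed from h_i and a whole-sentence representation s; the weight names (W_s, U_s, b) and the use of plain NumPy are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def selective_gate(encoder_states, sentence_repr, W_s, U_s, b):
    """Build a second-level representation by gating each encoder state.

    For each hidden state h_i, a gate with entries in (0, 1) is computed
    from h_i and a whole-sentence vector s, then applied elementwise:
        gate_i = sigmoid(W_s @ h_i + U_s @ s + b)
        h'_i   = h_i * gate_i
    The decoder would then attend over the gated states h'_i.
    """
    gated = []
    for h in encoder_states:                      # h: (d,)
        gate = sigmoid(W_s @ h + U_s @ sentence_repr + b)
        gated.append(h * gate)                    # elementwise re-weighting
    return np.stack(gated)                        # shape: (T, d)

# Tiny usage example with random (untrained) weights.
rng = np.random.default_rng(0)
d, T = 4, 3
W = rng.normal(size=(d, d))
U = rng.normal(size=(d, d))
b = np.zeros(d)
H = rng.normal(size=(T, d))                       # encoder hidden states
s = rng.normal(size=d)                            # sentence representation
second_level = selective_gate(H, s, W, U, b)
```

Because every gate entry lies in (0, 1), the gated states never exceed the originals in magnitude: the gate can only attenuate, i.e. filter, the information each state passes on to the decoder.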